Learning Networks from Wide-Sense Stationary Stochastic Processes
Rayas, Anirudh, Cheng, Jiajun, Anguluri, Rajasekhar, Deka, Deepjyoti, Dasarathy, Gautam
Complex networked systems driven by latent inputs are common in fields like neuroscience, finance, and engineering. A key inference problem here is to learn edge connectivity from node outputs (potentials). We focus on systems governed by steady-state linear conservation laws: $X_t = {L^{\ast}}Y_{t}$, where $X_t, Y_t \in \mathbb{R}^p$ denote inputs and potentials, respectively, and the sparsity pattern of the $p \times p$ Laplacian $L^{\ast}$ encodes the edge structure. Assuming $X_t$ to be a wide-sense stationary stochastic process with a known spectral density matrix, we learn the support of $L^{\ast}$ from temporally correlated samples of $Y_t$ via an $\ell_1$-regularized Whittle's maximum likelihood estimator (MLE). The regularization is particularly useful for learning large-scale networks in the high-dimensional setting where the network size $p$ significantly exceeds the number of samples $n$. We show that the MLE problem is strictly convex, admitting a unique solution. Under a novel mutual incoherence condition and certain sufficient conditions on $(n, p, d)$, we show that the ML estimate recovers the sparsity pattern of $L^\ast$ with high probability, where $d$ is the maximum degree of the graph underlying $L^{\ast}$. We provide recovery guarantees for $L^\ast$ in element-wise maximum, Frobenius, and operator norms. Finally, we complement our theoretical results with several simulation studies on synthetic and benchmark datasets, including engineered systems (power and water networks), and real-world datasets from neural systems (such as the human brain).
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > Montana (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- (4 more...)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Energy > Power Industry (1.00)
- Government > Regional Government > North America Government > United States Government (0.45)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Mathematical & Statistical Methods (0.85)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.34)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.34)
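The abstract above concerns an ℓ1-regularized Whittle MLE for temporally correlated samples; for intuition, the i.i.d. analogue of ℓ1-regularized maximum likelihood for sparse precision recovery is the graphical lasso. A minimal sketch of that analogue — not the paper's estimator — where the chain-graph Laplacian, sample size, and penalty level are all illustrative choices:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
p, n = 10, 2000

# Sparse ground-truth matrix: Laplacian of a chain graph (stand-in for L*)
L = 2 * np.eye(p) - np.eye(p, k=1) - np.eye(p, k=-1)
L[0, 0] = L[-1, -1] = 1.0

# i.i.d. stand-in for the paper's setting: Gaussian samples whose precision
# matrix (L plus a small ridge to make it invertible) carries the edge support
prec = L + 0.1 * np.eye(p)
Y = rng.multivariate_normal(np.zeros(p), np.linalg.inv(prec), size=n)

# l1-regularized Gaussian MLE (graphical lasso); the paper's Whittle MLE
# plays this role for wide-sense stationary, temporally correlated data
est = GraphicalLasso(alpha=0.05).fit(Y)
support = np.abs(est.precision_) > 1e-3   # estimated edge pattern
```

With enough samples the thresholded precision estimate recovers the chain's sparsity pattern; the paper's contribution is establishing such support-recovery guarantees when samples are dependent across time.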
Classification of sleep stages from EEG, EOG and EMG signals by SSNet
Almutairi, Haifa, Hassan, Ghulam Mubashar, Datta, Amitava
Classification of sleep stages plays an essential role in diagnosing sleep-related diseases, including Sleep Disordered Breathing (SDB). In this study, we propose an end-to-end deep learning architecture, named SSNet, which comprises two deep learning networks based on Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM). Both networks extract features from the combination of Electrooculogram (EOG), Electroencephalogram (EEG), and Electromyogram (EMG) signals, as each signal has distinct features that help in classifying sleep stages. The features produced by the two networks are concatenated and passed to a fully connected layer for classification. The performance of the proposed model is evaluated on two public datasets: the Sleep-EDF Expanded dataset and the ISRUC-Sleep dataset. On the Sleep-EDF Expanded dataset, the accuracy and Kappa coefficient are 96.36% and 93.40%, respectively, for classifying three classes of sleep stages, and 96.57% and 83.05%, respectively, for five classes. Our model achieves the best performance in classifying sleep stages when compared with state-of-the-art techniques.
- North America > United States (0.14)
- Asia > Middle East > Israel > Haifa District > Haifa (0.04)
- Oceania > Australia > Western Australia (0.04)
- Europe > Portugal > Coimbra > Coimbra (0.04)
- Research Report > Promising Solution (0.67)
- Research Report > New Finding (0.66)
- Health & Medicine > Therapeutic Area > Sleep (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Therapeutic Area > Cardiology/Vascular Diseases (1.00)
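The CNN-plus-LSTM fusion described in the abstract — two branches over the combined EEG/EOG/EMG channels, concatenated before a classifier head — can be sketched as a forward pass. Layer sizes, kernel widths, and the input length below are illustrative guesses, not the paper's configuration:

```python
import torch
import torch.nn as nn

class SSNetSketch(nn.Module):
    """Hypothetical sketch of the SSNet idea: a CNN branch and an LSTM branch
    extract features from the same multi-channel signal; their features are
    concatenated and fed to a fully connected classifier."""
    def __init__(self, n_channels=3, n_classes=5):
        super().__init__()
        self.cnn = nn.Sequential(                       # local waveform features
            nn.Conv1d(n_channels, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())      # -> (B, 16)
        self.lstm = nn.LSTM(input_size=n_channels,      # temporal dependencies
                            hidden_size=32, batch_first=True)
        self.fc = nn.Linear(16 + 32, n_classes)         # fused classifier head

    def forward(self, x):                    # x: (batch, channels, time)
        f_cnn = self.cnn(x)
        _, (h, _) = self.lstm(x.transpose(1, 2))
        f_lstm = h[-1]                       # last hidden state -> (B, 32)
        return self.fc(torch.cat([f_cnn, f_lstm], dim=1))

model = SSNetSketch()
logits = model(torch.randn(4, 3, 3000))      # 4 fake multi-channel epochs
```

The design point is that the two branches see the same input but summarize it differently, so concatenation gives the classifier both morphological and sequential evidence.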
Learning Networks of Stochastic Differential Equations
We consider linear models for stochastic dynamics. Any such model can be associated with a network (namely, a directed graph) describing which degrees of freedom interact under the dynamics. We tackle the problem of learning such a network from observations of the system trajectory over a time interval T. We analyse the ℓ1-regularized least squares algorithm and, in the setting in which the underlying network is sparse, we prove performance guarantees that are uniform in the sampling rate, provided the rate is sufficiently high.
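The recipe in this abstract can be sketched end to end: simulate a sparse linear SDE, form finite differences, and recover the drift matrix row by row with ℓ1-regularized least squares. Here sklearn's `Lasso` stands in for the paper's algorithm, and the drift matrix, step size, and penalty level are illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
p, T, dt = 5, 5000, 0.01

# Sparse drift matrix A (the network): self-decay plus chain interactions
A = -np.eye(p)
A[np.arange(p - 1), np.arange(1, p)] = 0.5

# Euler-Maruyama simulation of dX = A X dt + dW
X = np.zeros((T, p))
for t in range(T - 1):
    X[t + 1] = X[t] + A @ X[t] * dt + np.sqrt(dt) * rng.standard_normal(p)

# Regress finite differences on states, one row of A at a time, with an
# l1 penalty encouraging a sparse estimated network
dX = (X[1:] - X[:-1]) / dt
A_hat = np.vstack([
    Lasso(alpha=0.01, fit_intercept=False).fit(X[:-1], dX[:, i]).coef_
    for i in range(p)
])
```

Sampling faster shrinks `dt` but also inflates the noise in `dX`; the abstract's point is that the guarantees hold uniformly over sufficiently high sampling rates.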
Bigger Is Not Better: Why A Complex Deep Learning Network Is Often Worse than a Simple One for Business Problems
Artificial intelligence (AI) is rapidly advancing in the business world, with an increasing number of companies employing deep learning networks to improve their operations. However, it may come as a surprise that more complex and sophisticated deep learning models may not necessarily be better suited for solving business problems. In fact, in many cases, deploying a simpler network can yield more effective results. In this blog post, we'll explore why complex deep learning networks can be inefficient and even detrimental when applied to business scenarios. In my experience, one of the biggest challenges with deep learning networks is obtaining enough training data to achieve accurate results.
Forecasting with Deep Learning
This paper presents a method for time series forecasting with deep learning and its assessment on two datasets. The method starts with data preparation, followed by model training and evaluation. The final step is a visual inspection. Experimental work demonstrates that a single time series can be used to train deep learning networks if time series in a dataset contain patterns that repeat even with a certain variation. However, for less structured time series such as stock market closing prices, the networks perform just like a baseline that repeats the last observed value. The implementation of the method as well as the experiments are open-source.
- North America > Trinidad and Tobago > Trinidad > Arima > Arima (0.06)
- North America > Costa Rica > Heredia Province > Heredia (0.05)
- Europe > Denmark > North Jutland > Aalborg (0.05)
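The baseline this abstract compares against — repeating the last observed value, often called a persistence forecast — takes only a few lines. A minimal sketch, with made-up closing prices:

```python
import numpy as np

def persistence_forecast(series, horizon=1):
    """Naive baseline: predict that the next `horizon` values
    all equal the last observed value."""
    return np.full(horizon, series[-1])

prices = np.array([101.2, 100.8, 102.5, 102.1])
print(persistence_forecast(prices, horizon=3))   # [102.1 102.1 102.1]
```

That this trivial predictor matches deep networks on stock closing prices is the abstract's cautionary finding about less structured series.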
Regression Analysis Is Exceedingly Difficult: How to Master It Without Coding
Regression analysis is a technique that can be used to predict future outcomes. In machine learning, regression analysis is particularly useful when training models on large data sets. To achieve measurable outputs, we use historical data for prediction. Regression analysis is a complex technique, and there are many ways to perform it. Here, I will go over the basics of regression analysis using a simple example.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
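In the spirit of the simple example the article promises, here is a minimal linear-regression sketch; the ad-spend-versus-sales numbers are made up for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical historical data: ad spend (thousands) vs. monthly sales
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.9])   # roughly y = 2x

model = LinearRegression().fit(X, y)       # fit slope and intercept
pred = model.predict([[6.0]])              # forecast for a new spend level
```

The fitted slope answers the practical question ("how much do sales move per unit of spend?"), which is why regression is the workhorse for this kind of prediction from historical data.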
Deep Learning Network Types: What You Should Know
Deep learning networks (DLN) are a type of neural network that can learn to recognize patterns in large data sets and perform complex tasks. They are similar to traditional neural networks but have more layers of hidden units, which allow them to extract more features from the data. They are also often trained with less human supervision than traditional neural networks. This article will cover the types of deep learning networks and why they are essential for your personal or professional life. A deep learning network is a type of neural network that trains computers to recognize objects, words, and other patterns.
Beginning by hacking Tesla .. Is the world witnessing a global war for artificial intelligence?
At the height of the exchange of accusations between the United States and China over the Covid-19 disease, new signs of a war between the two countries appeared: an artificial intelligence war, which leads us to ask: is this technology ready to operate safely, and can military AI be easily deceived? Military AI technologies dominate military strategy in both the US and China, but what sparked the crisis was that last March, Chinese researchers launched a brilliant, and potentially devastating, attack against one of America's most valuable technological assets: the Tesla electric car. A research team from the security laboratory of the Chinese technology giant Tencent succeeded in finding several ways to deceive the artificial intelligence algorithms in the Tesla electric car by carefully altering the data fed to the car's sensors, and the team managed to trick and confuse the vehicle's AI. The team fooled Tesla's algorithms for detecting raindrops on the windshield and for following lane lines on the road: the windshield wipers were made to act as if it were raining, and the lane markings were modified to confuse the autonomous driving system so that it crossed into the opposite traffic lane in violation of traffic rules.
- North America > United States (1.00)
- Asia > China (0.48)
- Asia > Russia (0.16)
- Europe > Russia (0.05)
- Automobiles & Trucks (1.00)
- Transportation > Ground > Road (0.80)
- Government > Military (0.77)
- (3 more...)
MLJ: A Julia package for composable machine learning
Blaom, Anthony D., Kiraly, Franz, Lienart, Thibaut, Simillides, Yiannis, Arenas, Diego, Vollmer, Sebastian J.
Statistical modeling, and the building of complex modeling pipelines, is a cornerstone of modern data science. Most experienced data scientists rely on high-level open source modeling toolboxes - such as scikit-learn [1]; [2] (Python); Weka [3] (Java); mlr [4] and caret [5] (R) - for quick blueprinting, testing, and creation of deployment-ready models. These toolboxes provide a common interface to atomic components from an ever-growing model zoo, together with the means to incorporate them into complex workflows. Practitioners want to build increasingly sophisticated composite models, as exemplified by the strategies of top contestants in machine learning competitions such as Kaggle. MLJ (Machine Learning in Julia) [18] is a toolbox written in Julia that provides a common interface and meta-algorithms for selecting, tuning, evaluating, composing and comparing machine learning model implementations written in Julia and other languages. More broadly, the MLJ project hopes to bring cohesion and focus to a number of emerging and existing, but previously disconnected, high-quality machine learning algorithms and tools written in Julia. A welcome corollary of this activity will be increased cohesion and synergy within the talent-rich communities developing these tools. In addition to other novelties outlined below, MLJ aims to provide first-in-class model composition capabilities. Guiding goals of the MLJ project have been usability, interoperability, extensibility, code transparency, and reproducibility.
- Oceania > New Zealand > North Island > Auckland Region > Auckland (0.04)
- North America > United States (0.04)
- Europe > France > Bourgogne-Franche-Comté > Doubs > Besançon (0.04)